A novel loss function for the overall risk criterion based discriminative training of HMM models
Authors
Abstract
In this paper, we propose a novel loss function for overall risk criterion estimation of hidden Markov models. For continuous speech recognition, overall risk criterion estimation with the proposed loss function aims to directly maximise word recognition accuracy on the training database. We derive re-estimation equations for the HMM parameters using the Extended Baum-Welch algorithm. With HMMs trained by the proposed method, a decrease in word recognition error rate of up to 17.3% has been achieved on the phoneme recognition task of the TIMIT database.
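The Extended Baum-Welch re-estimation mentioned above can be sketched in its generic form for a Gaussian mean: the update mixes numerator (reference) and denominator (competitor) occupancy statistics, with a smoothing constant D guaranteeing a valid update. This is an illustrative sketch of the standard EBW mean formula, not the paper's exact equations; the function name and the choice of D are assumptions.

```python
import numpy as np

def ebw_mean_update(frames, gamma_num, gamma_den, mu_old, D):
    """Generic Extended Baum-Welch mean re-estimation for one Gaussian.

    frames    : (T, dim) observation vectors
    gamma_num : (T,) numerator-lattice occupancies (reference)
    gamma_den : (T,) denominator-lattice occupancies (competitors)
    mu_old    : (dim,) current mean
    D         : smoothing constant (must be large enough for stability)
    """
    diff = gamma_num - gamma_den
    numerator = diff @ frames + D * mu_old          # weighted stats + smoothing
    denominator = np.sum(diff) + D                  # occupancy mass + smoothing
    return numerator / denominator
```

With equal numerator occupancy on two frames, no competitor occupancy, and a zero prior mean, the update moves the mean part-way toward the data average, with D controlling the step size.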
Similar papers
Maximum F1-Score Discriminative Training for Automatic Mispronunciation Detection in Computer-Assisted Language Learning
In this paper, we propose and evaluate a novel discriminative training criterion for hidden Markov model (HMM) based automatic mispronunciation detection in computer-assisted pronunciation training. The objective function is formulated as a smooth form of the F1-score on the annotated non-native speech database. The objective function maximization is achieved by using extended Baum-Welch form l...
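A common way to make the F1-score differentiable, as the abstract above describes, is to replace hard detection counts with posterior probabilities, so that true positives become a sum of posteriors over annotated positives. The sketch below is a generic illustration of such a smooth F1 surrogate, not necessarily the exact formulation used in that paper.

```python
import numpy as np

def soft_f1(posteriors, labels):
    """Differentiable F1 surrogate: hard counts replaced by posteriors.

    posteriors : (N,) model posterior probability of a positive decision
    labels     : (N,) binary ground-truth annotations (0 or 1)
    """
    soft_tp = np.sum(posteriors * labels)           # expected true positives
    # F1 = 2*TP / (predicted positives + actual positives)
    return 2.0 * soft_tp / (np.sum(posteriors) + np.sum(labels))
```

Because the surrogate is smooth in the posteriors, it can be optimized with gradient-based or extended Baum-Welch style updates instead of requiring a hard decision threshold during training.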
Anti-Models: An Alternative Way to Discriminative Training
Traditional discriminative training methods modify hidden Markov model (HMM) parameters obtained via a maximum likelihood (ML) criterion based estimator. In this paper, anti-models are introduced instead. The anti-models are used in tandem with ML models to incorporate discriminative information from the training data set and modify the HMM output likelihood in a discriminative way. Traditional d...
Maximal rank likelihood as an optimization function for speech recognition
Research has shown that rank statistics derived from context-dependent state likelihoods can provide robust speech recognition. In previous work, empirical distributions were used to characterize the rank statistics. We present parametric models of the state rank and the rank likelihood, and then, based on them, present a new objective function, Maximal Rank Likelihood (MRL), for estimating parame...
Posterior-Scaled MPE: Novel Discriminative Training Criteria
We recently discovered novel discriminative training criteria following a principled approach. In this approach, training criteria are developed from error bounds on the global error for pattern classification tasks that depend on non-trivial loss functions. Automatic speech recognition (ASR) is a prominent example of such a task, depending on the non-trivial Levenshtein loss. In this context, t...
A Two-Channel Training Algorithm for Hidden Markov Model and Its Application to Lip Reading
The hidden Markov model (HMM) has been a popular mathematical approach for sequence classification, such as speech recognition, since the 1980s. In this paper, a novel two-channel training strategy is proposed for discriminative training of HMMs. For the proposed training strategy, a novel separable-distance function that measures the difference between a pair of training samples is adopted as the criteri...
Publication date: 2000